DISC Inc: Group items tagged Google Indexing



Banner Ads & Image Ads On Google Images - 0 views

  • Oct 3, 2008 at 5:37pm Eastern by Danny Sullivan    Banner Ads & Image Ads On Google Images Earlier this week, we noted a report that Google was showing banner ads on Google Images. Now via TechCrunch, a new implementation — an AdWords ad on Google Images with its own thumbnail image. Notice in the screenshot above how a Guinness logo appears next to a Target ad where NHL Buffalo merchandise is being sold on a search for buffalo logos. Why a Guinness logo is being used as part of the ad isn't clear. Here's the ad in context: While showing an image next to an ad is a big step for Google, it's not that far removed from the video they've tested with some ads. But an actual banner ad, like the one SearchViews spotted on Google Images, is another thing entirely: See it down at the bottom of the page? An actual banner ad. We're checking with Google for more details. Postscript: Google sent this: As part of our ongoing commitment to innovation and to help users find new and better ways of getting the information they're looking for, we are currently conducting a test to show ads on the results pages for Google Image Search. The experiment is restricted to U.S. advertisers who are using formats including text ads and static image ads. Display Ads Coming In Image Search from us back in May has more details on how Google said this type of test would be coming. There's also some discussion now developing on Techmeme.

Google Removes Directory Links From Webmaster Guidelines - 0 views

  • Oct 3, 2008 at 9:48am Eastern by Barry Schwartz    Google Removes Directory Links From Webmaster Guidelines Brian Ussery reported that Google has dropped two important bullet points from the Google Webmaster Guidelines. Those bullet points include: Have other relevant sites link to yours. Submit your site to relevant directories such as the Open Directory Project and Yahoo!, as well as to other industry-specific expert sites. At the same time, Google Blogoscoped reported that Google removed the dictionary link in the search results, at the top right of the results page. Whether the two are related, I am not sure. I speculated that maybe Google is going to go after more directories in the future. By removing those two bullet points, maybe Google can do this - without seeming all that hypocritical. In addition, I noted a comment from Google's John Mueller at a Google Groups thread where he explained the logic behind removing those two points: I wouldn't necessarily assume that we're devaluing Yahoo's links, I just think it's not one of the things we really need to recommend. If people think that a directory is going to bring them lots of visitors (I had a visitor from the DMOZ once), then it's obviously fine to get listed there. It's not something that people have to do though :-). As you can imagine, this is causing a bit of a commotion in some of the forums. Some are worried, some are mad, and some are confused by the change.

Google; You can put 50 words in your title tag, we'll read it | Hobo - 0 views

  •  
    Google; You can put 50 words in your title tag, we'll read it Blurb by Shaun Anderson Note - This is a test, testing Title Tags in Google. Consider also Google Title Tag Best Practice. We recently tested "how many keywords will Google read in the title tag / element?" using our simple seo mythbuster test (number 2 in the series). And here are the results, which are quite surprising. First - here's the test title tag we tried to get Google to swallow. And it did. All of it. Even though it was a bit spammy; HoboA HoboB HoboC HoboD HoboE HoboF HoboG HoboH HoboI HoboJ HoboK HoboL HoboM HoboN HoboO HoboP HoboQ HoboR HoboS HoboT HoboU HoboV HoboW HoboX HoboY Hob10 Hob20 Hob30 Hob40 Hob50 Hob60 Hob70 Hob80 Hob90 Hob11 Hob12 Hob13 Hob14 Hob15 Hob16 Hob17 Hob18 Hob19 Hob1a Hob1b Hob1c Hob1d Hob1e Hob1f Hob1g Hob1h Using a keyword search - hoboA Hob1h - we were surprised to see Google returned our page. We also tested it using - Hob1g Hob1h - the keywords right at the end of the title - and again our page was returned. So that's 51 words, and 255 characters without spaces, 305 characters with spaces, at least! It seems clear Google will read just about anything these days! ************** Update: Qwerty pointed out an interesting fact about the intitle: operator in Google. A search with the intitle: command returns results as expected, but the next search in the sequence returns an unexpected result. So what does this tell us? Google seems to stop at the 12th word on this page, at least when returning results using the intitle: operator. Another interesting observation. Thanks Qwerty. ************** We're obviously not sure what benefit a title tag with this many keywords in it has for your page, in terms of keyword density / dilution, and "clickability" in the search engine results pages (serps). 50+ words is certainly not best practice! When creating your title tag bear in
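As a quick way to reproduce the arithmetic behind the 51-word / 255-character figures above, here is a small Python sketch (my own illustration, not code from the Hobo article; the helper name is invented) that rebuilds the test title and counts its words and characters:

```python
def title_stats(title: str) -> dict:
    """Return simple length metrics for a <title> string."""
    words = title.split()
    return {
        "words": len(words),
        "chars_without_spaces": sum(len(w) for w in words),
        "chars_with_spaces": len(title),
    }

# Rebuild the 51-word Hobo test title (HoboA ... Hob1h).
test_title = " ".join(
    [f"Hobo{c}" for c in "ABCDEFGHIJKLMNOPQRSTUVWXY"]   # HoboA ... HoboY
    + [f"Hob{n}0" for n in range(1, 10)]                # Hob10 ... Hob90
    + [f"Hob1{c}" for c in "123456789abcdefgh"]         # Hob11 ... Hob1h
)

print(title_stats(test_title))
# {'words': 51, 'chars_without_spaces': 255, 'chars_with_spaces': 305}
```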

SEOmoz | Announcing SEOmoz's Index of the Web and the Launch of our Linkscape Tool - 0 views

  •  
    After 12 long months of brainstorming, testing, developing, and analyzing, the wait is finally over. Today, I'm ecstatic to announce some very big developments here at SEOmoz. They include: * An Index of the World Wide Web - 30 billion pages (and growing!), refreshed monthly, built to help SEOs and businesses acquire greater intelligence about the Internet's vast landscape * Linkscape - a tool enabling online access to the link data provided by our web index, including ordered, searchable lists of links for sites & pages, and metrics to help judge their value. * A Fresh Design - that gives SEOmoz a more usable, enjoyable, and consistent browsing experience * New Features for PRO Membership - including more membership options, credits to run advanced Linkscape reports (for all PRO members), and more. Since there's an incredible amount of material, I'll do my best to explain things clearly and concisely, covering each of the big changes. If you're feeling more visual, you can also check out our Linkscape comic, which introduces the web index and tool in a more humorous fashion: Check out the Linkscape Comic SEOmoz's Index of the Web For too long, data that is essential to the practice of search engine optimization has been inaccessible to all but a handful of search engineers. The connections between pages (links) and the relationship between links, URLs, and the web as a whole (link metrics) play a critical role in how search engines analyze the web and judge individual sites and pages. Professional SEOs and site owners of all kinds deserve to know more about how their properties are being referenced in such a system. We believe there are thousands of valuable applications for this data and have already put some effort into retrieving a few fascinating statistics: * Across the web, 58% of all links are to internal pages on the same domain, 42% point to pages off the linking site. * 1.83%

Limit Anchor Text Links To 55 Characters In Length? | Hobo - 0 views

  •  
    Limit Anchor Text Links To 55 Characters In Length? Blurb by Shaun. As an SEO I wanted to know - how many words or characters does Google count in a link? What's best practice when creating links - internal, or external? What is the optimal length of an HTML link? It appears the answer to the question "how many words in a text link" is 55 characters, about 8-10 words. Why is this important to know? 1. You get to understand how many words Google will count as part of a link 2. You can see why you should keep titles to a maximum amount of characters 3. You can see why your domain name should be short and why urls should be snappy 4. You can see why you should rewrite your urls (SEF) 5. It's especially useful when thinking about linking internally, via body text on a page. I wanted to see how many words Google will count in one 'link' to pass on anchor text power to another page, so I did a test a bit like this one below; 1. pointed some nonsense words in one massive link, 50 words long, at the home page of a 'trusted' site 2. each of the nonsense words were 6 characters long 3. Then I did a search for something generic that the site would rank no1 for, and added the nonsense words to the search, so that the famous "This word only appears in links to the site" (paraphrase) kicked in 4. This I surmised would let me see how many of the nonsense words Google would attribute to the target page from the massive 50 word link I tried to get it to swallow. The answer was….. 1. Google counted 8 words in the anchor text link out of a possible 50. 2. It seemed to ignore everything else after the 8th word 3. 8 words x 6 characters = 48 characters + 7 spaces = a nice round and easy to remember number - 55 Characters. So, a possible best practice in number of words in an anchor text might be to keep a link under 8 words but importantly under 55 characters, because everything after it is ignored
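If you want to apply that 8-word / 55-character rule of thumb to your own anchors, the check can be as simple as the Python sketch below (my own illustration, not code from the Hobo test; the limits are just the numbers observed above, not documented Google behaviour):

```python
MAX_WORDS = 8    # words Google appeared to count in the test
MAX_CHARS = 55   # characters, including spaces

def anchor_ok(anchor_text: str) -> bool:
    """True if the anchor text stays inside the 8-word / 55-character rule of thumb."""
    anchor_text = " ".join(anchor_text.split())  # collapse whitespace
    return len(anchor_text.split()) <= MAX_WORDS and len(anchor_text) <= MAX_CHARS

print(anchor_ok("buy cheap blue widgets online"))                                # True
print(anchor_ok("the very best place to buy cheap blue widgets online today"))   # False
```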

Deduping Duplicate Content - ClickZ - 0 views

  •  
    One interesting thing that came out of SES San Jose's Duplicate Content and Multiple Site Issues session in August was the sheer volume of duplicate content on the Web. Ivan Davtchev, Yahoo's lead product manager for search relevance, said "more than 30 percent of the Web is made up of duplicate content." At first I thought, "Wow! Three out of every 10 pages consist of duplicate content on the Web." My second thought was, "Sheesh, the Web is one tangled mess of equally irrelevant content." Small wonder trust and linkage play such significant roles in determining a domain's overall authority and consequent relevancy in the search engines. Three Flavors of Bleh Davtchev went on to explain three basic types of duplicate content: 1. Accidental content duplication: This occurs when Webmasters unintentionally allow content to be replicated by non-canonicalization, session IDs, soft 404s, and the like. 2. Dodgy content duplication: This primarily consists of replicating content across multiple domains. 3. Abusive content duplication: This includes scraper spammers, weaving or stitching (mixed and matched content to create "new" content), and bulk content replication. Fortunately, Greg Grothaus from Google's search quality team had already addressed the duplicate content penalty myth, noting that Google "tries hard to index and show pages with distinct information." It's common knowledge that Google uses a checksum-like method for initially filtering out replicated content. For example, most Web sites have a regular and print version of each article. Google only wants to serve up one copy of the content in its search results, which is predominantly determined by linking prowess. Because most print-ready pages are dead-end URLs sans site navigation, it's relatively simple to determine which page Google prefers to serve up in its search results. In exceptional cases of content duplication that Google perceives as an abusive attempt to manipula
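The exact filtering method Google uses isn't public; the article only calls it "checksum-like". As a toy illustration of that general idea (entirely my own sketch, with invented URLs), duplicate bodies of text can be collapsed by hashing a normalised copy of each page and keeping one URL per hash:

```python
import hashlib

def fingerprint(page_text: str) -> str:
    """Checksum of lightly normalised text (toy stand-in for a real duplicate filter)."""
    normalised = " ".join(page_text.lower().split())
    return hashlib.md5(normalised.encode("utf-8")).hexdigest()

def dedupe(pages: dict) -> dict:
    """Keep the first URL seen for each identical body of text."""
    kept, seen = {}, set()
    for url, text in pages.items():
        digest = fingerprint(text)
        if digest not in seen:
            seen.add(digest)
            kept[url] = text
    return kept

pages = {
    "/article/42": "How to dedupe content. Step one ...",
    "/article/42/print": "How to dedupe content. Step one ...",  # print version
}
print(list(dedupe(pages)))  # ['/article/42'] - the print copy is filtered out
```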

Beyond conventional SEO: Unravelling the mystery of the organic product carousel - Sear... - 0 views

  • How to influence the organic product carousel: In Google's blog post, they detailed three factors that are key inputs: Structured Data on your website, providing real-time product information via Merchant Center, and providing additional information through Manufacturer Center. This section of the article will explore Google's guidance, along with some commentary on what I've noticed based on my own experiences.
  • Make sure your product markup is validated: The key here is to make sure the Product markup with Structured Data on your page adheres to Google's guidelines and is validated (a minimal markup sketch follows after this list).
  • Submit your product feed to Google via Merchant Center: This is where it starts to get interesting. By using Google's Merchant Center, U.S. product feeds are now given the option to submit data via a new destination. The difference here for Google is that retailers are able to provide more up-to-date information about their products, rather than waiting for Google to crawl your site (which is what happens in step 1). Checking the box for "Surfaces across Google" gives you the ability to grant access to your website's product feed, allowing your products to be eligible in areas such as Search and Google Images. For the purpose of this study we are most interested in Search, with the Organic Product Carousel in mind. "Relevance" of information is the deciding factor of this feature. Google states that in order for this feature of Search to operate, you are not required to have a Google Ads campaign. Just create an account, then upload a product data feed. Commentary by PPC expert Kirk Williams: "Setting up a feed in Google Merchant Center has become even more simple over time since Google wants to guarantee that they have the right access, and that retailers can get products into ads! You do need to make sure you add all the business information and shipping/tax info at the account level, and then you can set up a feed fairly easily with your dev team, a third party provider like Feedonomics, or with Google Sheets. As I note in my "Beginner's Guide to Shopping Ads", be aware that the feed can take up to 72 hours to process, and even longer to begin showing in SERPs. Patience is the key here if just creating a new Merchant Center… and make sure to stay up on those disapprovals as Google prefers a clean GMC account and will apply more aggressive product disapproval filters to accounts with more disapprovals." - Kirk Williams. For a client I'm working with, completing this step resulted in several of their products being added to the top 10 of the PP carousel, one of which is in the top 5 and visible when the SERP first loads. This meant that, in this specific scenario, the product Structured Data that Google was regularly crawling and indexing in the US wasn't enough on its own to be considered for the Organic Product Carousel. Note: the products that were added to the carousel were already considered "popular" - Google just hadn't added them in. It is not guaranteed that your products will be added just because this step was completed; it really comes down to the prominence of your product and relevance to the query (same as any other page that ranks).
  • ...2 more annotations...
  • 3. Create an additional feed via Manufacturer Center: The next step involves the use of Google's Manufacturer Center. Again, this tool works in the same way as Merchant Center: you submit a feed, and can add additional information. This information includes product descriptions, variants, and rich content, such as high-quality images and videos that can show within the Product Knowledge Panel. You'll need to first verify your brand name within the Manufacturer Center dashboard, then you can proceed to uploading your product feed. When Google references the "Product Knowledge Panel" in their release, it's not the same type of Knowledge Panel many in the SEO industry are accustomed to. This Product Knowledge Panel contains very different information compared to your standard KP that is commonly powered by Wikipedia, and appears in various capacities (based on how much data it has access to). Here's what this Product Knowledge Panel looks like in its most refined state, completely populated with all the information that can be displayed: Type #1 just shows the product image(s), the title and the review count. Type #2 is an expansion on Type #1 with further product details, and another link to the reviews. Type #3 is the more standard-looking Knowledge Panel, with the ability to share a link with an icon on the top right. This Product Knowledge Panel has a description and more of a breakdown of reviews, with the average rating. This is the evolved state where I tend to see Ads being placed within. Type #4 is an expansion of Type #3, with the ability to filter through reviews and search the database with different keywords. This is especially useful functionality when assessing the source of the aggregated reviews. Based on my testing with a client in the U.S., adding the additional information via Manufacturer Center resulted in a new product getting added to a PP carousel. This happened two weeks after submitting the feed, so there still could be further impact to come. I will likely wait longer and then test a different approach.
  • Quick recap: Organic Product Carousel features are due to launch globally at the end of 2019. Popular Product and Best Product carousels are the features to keep an eye on. Make sure your products have valid Structured Data, a submitted product feed through Merchant Center, and a feed via Manufacturer Center. Watch out for cases where your client's brand is given a low review score due to the data sources Google has access to. Do your own testing. As Cindy Krum mentioned earlier, there are a lot of clicks between the Organic Product Carousel listings and your website's product page. Remember: there may be cases where it is not possible to get added to the carousel due to an overarching "prominence" factor. Seek out realistic opportunities.
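For the markup-validation step flagged above, this is roughly what a minimal Product structured data block looks like. The Python sketch below is my own example with placeholder values (product name, URLs, prices are all invented), and it simply prints the JSON-LD script block you would paste into a product page template; the output should still be checked with Google's structured data testing tools:

```python
import json

# Hypothetical product values - replace with your own catalogue data.
product_markup = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Trail Running Shoe",
    "image": ["https://www.example.com/images/shoe.jpg"],
    "description": "Lightweight trail running shoe with a grippy outsole.",
    "sku": "TRAIL-001",
    "brand": {"@type": "Brand", "name": "ExampleBrand"},
    "offers": {
        "@type": "Offer",
        "url": "https://www.example.com/shoes/trail-001",
        "priceCurrency": "USD",
        "price": "129.99",
        "availability": "https://schema.org/InStock",
    },
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.6",
        "reviewCount": "212",
    },
}

# Emit the <script> block to paste into the product page template.
print('<script type="application/ld+json">')
print(json.dumps(product_markup, indent=2))
print("</script>")
```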

Google SEO Test - Google Prefers Valid HTML & CSS | Hobo - 0 views

  •  
    Well - the result is clear. From these 4 pages Google managed to pick the page with valid CSS and valid HTML as the preferred page to include in its index! OK, it might be a bit early to see if the four pages in the test eventually appear in Google, but on first glance it appears Google spidered the pages, examined them, applied duplicate content filters as expected, and selected one to include in search engine results. It just happens that Google seems to prefer the page with valid code as laid down by the W3C (World Wide Web Consortium). The W3C was started in 1994 to lead the Web to its full potential by developing common protocols that promote its evolution and ensure its interoperability. What is the W3C? * W3C stands for the World Wide Web Consortium * W3C was created in October 1994 * W3C was created by Tim Berners-Lee * W3C was created by the Inventor of the Web * W3C is organized as a Member Organization * W3C is working to Standardize the Web * W3C creates and maintains WWW Standards * W3C Standards are called W3C Recommendations How The W3C Started The World Wide Web (WWW) began as a project at the European Organization for Nuclear Research (CERN), where Tim Berners-Lee developed a vision of the World Wide Web. Tim Berners-Lee - the inventor of the World Wide Web - is now the Director of the World Wide Web Consortium (W3C). W3C was created in 1994 as a collaboration between the Massachusetts Institute of Technology (MIT) and the European Organization for Nuclear Research (CERN), with support from the U.S. Defense Advanced Research Projects Agency (DARPA) and the European Commission. W3C Standardising the Web W3C is working to make the Web accessible to all users (despite differences in culture, education, ability, resources, and physical limitations). W3C also coordinates its work with many other standards organizations such as the Internet Engineering Task Force, the Wireless Application Protocols (WAP) Forum an

Can Google Ignore Portions Of Your Site For Accessing Quality - 0 views

  • John was asked how long a site needs to wait for Google to process a quality change, and the answer was at least two months - one month won't cut it. And this applies to both Google Search and Google Discover; it isn't different. John said he would guess that for a large site a couple of months would give Google a chance to understand it better. A month is too little to see a significant impact.
  • John then goes into explaining that for a site that produces a lot of new content often, Google will "focus essentially on the newer content on the main category sections of the web site." Because of the structure of your site, you are giving your newer content more prominence on your web site, and Google will focus its crawling and indexing more on that newer content. John said if you are constantly creating new content, then that is where Google will shift its focus.
  • if you're looking at an overall quality issue with regards to your website and you have kind of this reference part that's really important for your website but it's really low quality, then we will still balance that low quality part with your newer quality news content and try to find some middle ground there with regards to how we understand the quality of your website overall.
  • ...1 more annotation...
  • John has said that Google only judges sites based on what it indexes of that site. And if Google is not indexing big portions of your site, it won't judge those portions. Get it? So if Google is focused on indexing newer content, based on how you structure your web site then Google might not be indexing that low quality content from ages ago anymore. That older lower quality content won't be ranking in Google but at the same time, it won't be dragging down your site's quality. Again, "it depends" on your site and specific situation for your web site.

Google Docs Gains E-Commerce Option - Google Blog - InformationWeek - 0 views

  • Google Docs Gains E-Commerce Option Posted by Thomas Claburn, Jul 30, 2009 06:10 PM Google (NSDQ: GOOG) on Thursday released the Google Checkout store gadget, software that allows any Google Docs user to create an online store and sell items using a Google spreadsheet. "Using new Spreadsheet Data APIs, we've integrated Google Docs and Google Checkout to make online selling a breeze," explains Google Checkout strategist Mike Giardina in a blog post. "In three simple steps, you'll be able to create an online store that's powered by Google Checkout and has inventory managed and stored in a Google spreadsheet." Giardina insists that the process is simple and can be completed in less than five minutes. To use the gadget, Google users first have to sign up for Google Checkout. They can then list whatever they want to sell in a Google spreadsheet and insert the Checkout gadget, which can also be used on Google Sites, Blogger, and iGoogle.

Google Confirms "Mayday" Update Impacts Long Tail Traffic - 0 views

  • Google Confirms "Mayday" Update Impacts Long Tail Traffic May 27, 2010 at 11:02am ET by Vanessa Fox Google made between 350 and 550 changes in its organic search algorithms in 2009. This is one of the reasons I recommend that site owners not get too fixated on specific ranking factors. If you tie construction of your site to any one perceived algorithm signal, you're at the mercy of Google's constant tweaks. These frequent changes are one reason Google itself downplays algorithm updates. Focus on what Google is trying to accomplish as it refines things (the most relevant, useful results possible for searchers) and you'll generally avoid too much turbulence in your organic search traffic. However, sometimes a Google algorithm change is substantial enough that even those who don't spend a lot of time focusing on the algorithms notice it. That seems to be the case with what those discussing it at Webmaster World have named "Mayday". Last week at Google I/O, I was on a panel with Googler Matt Cutts who said, when asked during Q&A, "this is an algorithmic change in Google, looking for higher quality sites to surface for long tail queries. It went through vigorous testing and isn't going to be rolled back." I asked Google for more specifics and they told me that it was a rankings change, not a crawling or indexing change, which seems to imply that sites getting less traffic still have their pages indexed, but some of those pages are no longer ranking as highly as before. Based on Matt's comment, this change impacts "long tail" traffic, which generally is from longer queries that few people search for individually, but in aggregate can provide a large percentage of traffic. This change seems to have primarily impacted very large sites with "item" pages that don't have many individual links into them, might be several clicks from the home page, and may not have substantial unique and value-added content on them. For instance, ecommerce sites often have this structure. The individual product pages are unlikely to attract external links and the majority of the content may be imported from a manufacturer database. Of course, as with any change that results in a traffic hit for some sites, other sites experience the opposite. Based on Matt's comment at Google I/O, the pages that are now ranking well for these long tail queries are from "higher quality" sites (or perhaps are "higher quality" pages). My complete speculation is that perhaps the relevance algorithms have been tweaked a bit. Before, pages that didn't have high quality signals might still rank well if they had high relevance signals. And perhaps now, those high relevance signals don't have as much weight in ranking if the page doesn't have the right quality signals. What's a site owner to do? It can be difficult to create compelling content and attract links to these types of pages. My best suggestion to those who have been hit by this is to isolate a set of queries for which the site now is getting less traffic and check out the search results to see what pages are ranking instead. What qualities do they have that make them seen as valuable? For instance, I have no way of knowing how amazon.com has fared during this update, but they've done a fairly good job of making individual item pages with duplicated content from manufacturers' databases unique and compelling by the addition of content like user reviews. They have set up a fairly robust internal linking (and anchor text) structure with things like recommended items and lists.
And they attract external links with features such as the my favorites widget. From the discussion at the Google I/O session, this is likely a long-term change, so if your site has been impacted by it, you'll likely want to do some creative thinking around how you can make these types of pages more valuable (which should increase user engagement and conversion as well). Update on 5/30/10: Matt Cutts from Google has posted a YouTube video about the change. In it, he says "it's an algorithmic change that changes how we assess which sites are the best match for long tail queries." He recommends that a site owner who is impacted evaluate the quality of the site and, if the site really is the most relevant match for the impacted queries, what "great content" could be added; determine if the site is considered an "authority"; and ensure that the page does more than simply match the keywords in the query and is relevant and useful for that query. He notes that the change: has nothing to do with the "Caffeine" update (an infrastructure change that is not yet fully rolled out); is entirely algorithmic (and isn't, for instance, a manual flag on individual sites); impacts long tail queries more than other types; and was fully tested and is not temporary.

Problem with Google indexing secure pages, dropping whole site. - Search Engine Watch F... - 0 views

  • Coincidentally, Google e-mailed me today saying to use a 301 redirect for the https page to http. This is the first thought I had, and I tried to find code to do this for days when this problem first occurred - I never found it.
  •  
    04-25-2006, Chris_D (moderator, Searching Tips & Techniques; Sydney, Australia): Hi docprego, Set your browser to reject cookies, and then surf your site (I'm assuming it's the one in your profile). Now look at your URLs when you reject cookies: /index.php?cPath=23&osCsid=8cfa2cb83fa9cc92f78db5f44abea819 /about_us.php?osCsid=33d0c44757f97f8d5c9c68628eee0e2b You are appending cookie strings to the URLs for user agents that reject cookies. That is the biggest problem. Get someone who knows what they are doing to look at your server configuration - it's the problem, not Google. Google has always said: Quote: Use a text browser such as Lynx to examine your site, because most search engine spiders see your site much as Lynx would. If fancy features such as JavaScript, cookies, session IDs, frames, DHTML, or Flash keep you from seeing all of your site in a text browser, then search engine spiders may have trouble crawling your site. Allow search bots to crawl your sites without session IDs or arguments that track their path through the site. These techniques are useful for tracking individual user behavior, but the access pattern of bots is entirely different. Using these techniques may result in incomplete indexing of your site, as bots may not be able to eliminate URLs that look different but actually point to the same page. http://www.google.com/webmasters/guidelines.html You've also excluded a few pages in your http port 80 non-secure robots.txt which I would have expected you want to be indexed - like /about_us.php From an information architecture perspective, as Marcia said - put the stuff that n
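The thread points at two fixes: 301-redirect the https duplicates to http (the advice Google e-mailed, in a pre-HTTPS-everywhere era) and stop appending osCsid session IDs to URLs served to cookie-less user agents. The site in question ran an osCommerce/PHP stack, and the thread never shows the actual server configuration; purely as an illustration of the same idea, here is how it might look as a small Python/Flask hook (hypothetical code, not the poster's fix):

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

from flask import Flask, redirect, request

app = Flask(__name__)

@app.before_request
def canonicalise():
    """301 https duplicates to http and strip osCsid session IDs from URLs."""
    parts = urlsplit(request.url)
    query = [(k, v) for k, v in parse_qsl(parts.query) if k != "osCsid"]
    # Thread-era advice was https -> http; today you would normally redirect the other way.
    scheme = "http" if parts.scheme == "https" else parts.scheme
    canonical = urlunsplit((scheme, parts.netloc, parts.path, urlencode(query), ""))
    if canonical != request.url:
        return redirect(canonical, code=301)  # permanent redirect to the canonical URL

@app.route("/")
@app.route("/<path:page>")
def catch_all(page=""):
    return "page content here"  # stand-in for the real shop pages
```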

70+ Best Free SEO Tools (As Voted-for by the SEO Community) - 1 views

  • Soovle — Scrapes Google, Bing, Yahoo, Wikipedia, Amazon, YouTube, and Answers.com to generate hundreds of keyword ideas from a seed keyword. Very powerful tool, although the UI could do with some work. Hemingway Editor — Improves the clarity of your writing by highlighting difficult-to-read sentences, "weak" words, and so forth. A must-have tool for bloggers (I use it myself).
  • Yandex Metrica — 100% free web analytics software. Includes heat maps, form analytics, session reply, and many other features you typically wouldn’t see in a free tool.
  • For example, two of my all-time favourite tools are gInfinity (Chrome extension) and Chris Ainsworth's SERPs extraction bookmarklet. By combining these two free tools, you can extract multiple pages of the SERPs (with meta titles + descriptions) in seconds.
  • ...17 more annotations...
  • Keyword Mixer — Combine your existing keywords in different ways to try and find better alternatives. Also useful for removing duplicates from your keywords list (a do-it-yourself sketch of the same idea appears after this list). Note: MergeWords does (almost) exactly the same job, albeit with a cleaner UI. However, there is no option to de-dupe the list.
  • LSIgraph.com — Latent Semantic Indexing (LSI) keywords generator. Enter a seed keyword, and it’ll generate a list of LSI keywords (i.e. keywords and topics semantically related to your seed keyword). TextOptimizer is another very similar tool that does roughly the same job.
  • Small SEO Tools Plagiarism Checker — Detects plagiarism by scanning billions of documents across the web. Useful for finding those who’ve stolen/copied your work without attribution.
  • iSearchFrom.com — Emulate a Google search using any location, device, or language. You can customise everything from SafeSearch settings to personalised search.
  • Delim.co — Convert a comma-delimited list (i.e. CSV) in seconds. Not necessarily an SEO tool per se but definitely very useful for many SEO-related tasks.
  • Am I Responsive? — Checks website responsiveness by showing you how it looks on desktop, laptop, tablet, and mobile.
  • SERPLab — Free Google rankings checker. Updates up to 50 keywords once every 24 hours (server permitting).
  • Varvy — Checks whether a web page is following Google’s guidelines. If your website falls short, it tells you what needs fixing.
  • JSON-LD Schema Generator — JSON-LD schema markup generator. It currently supports six markup types including: product, local business, event, and organization.
  • KnowEm Social Media Optimizer — Analyses your web page to see if it’s well-optimised for social sharing. It checks for markup from Facebook, Google+, Twitter, and LinkedIn.
  • Where Goes? — Shows you the entire path of meta-refreshes and redirects for any URL. Very useful for diagnosing link issues (e.g. complex redirect chains).
  • Google Business Review Link Generator — Generates a direct link to your Google Business listing. You can choose between a link to all current Google reviews, or to a pre-filled 5-star review box.
  • PublicWWW — Searches the web for pages using source code-based footprints. Useful for finding your competitors' affiliates, websites with the same Google Analytics code, and more.
  • Keywordtool.io — Scrapes Google Autosuggest to generate 750 keyword suggestions from one seed keyword. It can also generate keyword suggestions for YouTube, Bing, Amazon, and more.
  • SERPWatcher — Rank tracking tool with a few unique metrics (e.g. “dominance index”). It also shows estimated visits and ranking distribution charts, amongst other things.
  • GTMetrix — Industry-leading tool for analysing the loading speed of your website. It also gives actionable recommendations on how to make your website faster.
  • Mondovo — A suite of SEO tools covering everything from keyword research to rank tracking. It also generates various SEO reports. SEO Site Checkup — Analyse various on-page/technical SEO issues, monitor rankings, analyse competitors, create custom white-label reports, and more.
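As promised next to the Keyword Mixer entry above, combining and de-duplicating keyword lists is easy to script yourself. The sketch below is my own illustration, not code from any of the tools listed:

```python
from itertools import product

def mix_keywords(*word_lists):
    """Combine word lists in order and return a de-duplicated keyword list."""
    seen, combos = set(), []
    for words in product(*word_lists):
        phrase = " ".join(words).lower()   # lowercase so case-only variants collapse
        if phrase not in seen:
            seen.add(phrase)
            combos.append(phrase)
    return combos

print(mix_keywords(["cheap", "best"], ["seo", "SEO"], ["tools"]))
# ['cheap seo tools', 'best seo tools'] - the 'SEO'/'seo' duplicates collapse
```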

Google Insights Forecasts the Future - Google Blog - InformationWeek - 0 views

  • Google Insights Forecasts the Future Posted by Thomas Claburn, Aug 17, 2009 05:46 PM Google (NSDQ: GOOG) has enhanced Google Insights for Search, its search term data analysis tool, to help users see into the future. Now available in 39 languages, Google Insights for Search includes a new forecasting feature that can extrapolate a search term's future popularity based on its past performance. For search terms with a lot of historical data, Google Insights for Search can project a likely trend. It's not a perfect prediction of what's to come, but it may be useful in certain circumstances. Google has also added an animated map that allows users to see how search query volume changes over time in specific geographic regions. Graphs generated using Google Insights for Search can be presented on any Web page or iGoogle page using an embeddable gadget. This is particularly useful for tracking the ebb and flow of online discussion about a particular topic.

Does Adding Keywords in an Image Filename Help Ranking? - Sterling Sky Inc - 0 views

  • unlike photos on your website, photos on Google My Business listings don’t get indexed or included in Google Images search.  If you take a photo from your GMB listing and run it through Google Images search, it will return no results, provided the image isn’t also hosted anywhere else online.

How Google's Selective Link Priority Impacts SEO (2023 Study) - 0 views

  • How Google’s Selective Link Priority Impacts SEO (2023 Study)
  • First Link Priority
  • only have selected one of the links from a given page.
  • ...6 more annotations...
  • Google only counted the first anchor text
  • So even if you manage to figure out how we currently do it today, then that’s not necessarily how we’ll do it tomorrow, or how it always is across all websites.
  • Test #1 Takeaway: Google seems to be able to count multiple anchor texts on the same page to the same target, at least if one of the links is an image.
  • Test #2 Takeaway: When Google encountered two text links followed by an image link, Google indexed the first text and image anchors only.
  • Test #3 Takeaway: When Google encountered two text links followed by an image link and finally another text link, Google indexed the first text and image anchors only.
  • How to Optimize For Google's Selective Link Priority Let's be clear: Selective Link Priority most likely isn't going to make a huge difference in your SEO strategy, but it can make a difference, especially in tie-breaker situations. In particular, here are five internal linking practices in a Selective Link Priority world: 1. Be aware when linking on a page multiple times to the same URL that Google may not "count" all of your anchor text. 2. When in doubt, you should likely prioritize both the first text link and image links on the page. 3. Remember that each link to a URL—regardless of anchor text—has the potential to increase that URL's PageRank. 4. Don't leave image alt attributes empty, and remember to vary them from any text link anchors. Not only can Google index the alt attribute as a separate anchor, but this gives you the chance to further increase your anchor text variations. 5. Sites with smaller external link profiles may wish to limit the number of navigational links in preference of in-body text links. The reason is that if Google does indeed tend to prefer the first links on the page—and these are navigational—this limits the number of anchor text variations you can send to any page. (This isn't a hard-and-fast rule. In fact, it's a nuanced, complex subject that may warrant a whole other post.) The most important thing to remember is this – anchor text is a powerful ranking signal, even for internal links. Carefully choosing your anchor text—while avoiding over-optimization—can make a difference in winning SEO. If your SEO game is otherwise strong, you may be able to get away with ignoring Google's Selective Link Priority rules (as most sites do already). But you should at least be aware of how it works and what it means to your strategy.
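If you want a rough view of which anchor would "win" on one of your own pages under a first-link rule, something like the Python sketch below can help. It is my own approximation of the behaviour described in the tests (record the first text or image-alt anchor seen per target URL), not a reproduction of Google's actual logic:

```python
from html.parser import HTMLParser

class FirstAnchorMap(HTMLParser):
    """Record the first anchor text (or image alt) seen for each link target."""
    def __init__(self):
        super().__init__()
        self.first_anchor = {}   # href -> the anchor text a first-link rule would keep
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a":
            self._href, self._text = attrs.get("href"), []
        elif tag == "img" and self._href is not None:
            self._text.append(attrs.get("alt", ""))   # image links contribute alt text

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            anchor = " ".join("".join(self._text).split())
            self.first_anchor.setdefault(self._href, anchor)  # keep only the first
            self._href = None

html = '<a href="/shoes">running shoes</a> <a href="/shoes">cheap shoes</a>'
parser = FirstAnchorMap()
parser.feed(html)
print(parser.first_anchor)  # {'/shoes': 'running shoes'}
```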

Google begins rolling out mobile-first indexing to more sites - Search Engine Land - 0 views

  • This is the first time Google has confirmed it is moving a large number of sites to this mobile-first indexing process. Google did tell us last October that a limited number of sites had been moved over. But this Google announcement makes it sound like the process of mobile-first indexing on a larger scale has already begun.

Evidence of Page Level Google Penalties? - 0 views

  • June 18, 2009 Evidence of Page Level Google Penalties? Richard at SEO Gadget showed how Google seemed to have penalized specific pages of his site from ranking in the Google index. The penalty seemed to be fair, in that there were nasty comments that slipped through his comment spam filter. The drop in traffic can be seen by the keyword phrases that page ranked well for. He noticed a ~70% drop in traffic for that phrase, which in his case resulted in a 15% drop in his Google traffic and a 5% drop in overall traffic. What I find extra fun is that a Google Search Quality Analyst, @filiber, tweeted: Google Page level penalty for comment spam – rankings and traffic drop http://bit.ly/JNAly (via @AndyBeard) <- interesting read! Of course that is not an admission that this is fact, but it wouldn't be too hard to believe that bad comments caused such a decline. Now, I don't think this would be considered a keyword-specific penalty, which most SEOs believe in, but rather a specific page being penalized. Forum discussion at Sphinn.

Google Working on Faster, More Caffeinated Search Engine - MarketingVOX - 0 views

  • Google Working on Faster, More Caffeinated Search Engine Google announced today that it has been working on a faster search engine that will improve results for web developers and power searchers. Dubbed Caffeine, the new project focuses on next-generation infrastructure and seeks to improve performance in a host of areas including size, indexing speed, accuracy and comprehensiveness. Could this also be seen as a step towards improving access to the Deep Web? For now though, developers are being asked to go and check out http://www2.sandbox.google.com/ and try a few searches with it. Then, compare those results with those found on the current Google site. If a "Dissatisfied? Help us improve." link displays, click on it and type your feedback in the text box along with the word caffeine. Since it's still a work in progress, Google engineers will be monitoring all feedback.

Everything Publishers Need to Know About URLs - 0 views

  • if you’re currently getting good traffic from Google News and Top Stories, don’t change any part of your domain name.
  • don’t change section URLs unless you really need to.
  • Including folder names in the URL can help Google identify relevant entities that apply to the section, but there doesn’t need to be a hierarchical relationship.
  • ...12 more annotations...
  • Do article URLs need dates in them? No. If you currently use dates in your article URLs (like theguardian.com does), you don't need to remove them. It's fine to have them, but there may be a small downside with regards to evergreen content that you want to rank beyond the news cycle;
  • Should article URLs have a folder structure? This is optional, but it might help. Google likes to see relevant entities that are mentioned in an article reflected in the URL
  • Can the URL be different from the headline? Yep, as long as they convey the same meaning.
  • Can you use special characters in a URL? Short answer: yes, but you shouldn't.
  • Long answer: special characters are often processed just fine, but sometimes can lead to issues when a character needs to be encoded or is otherwise not easily parsed. It's better to keep things simple and stick to basic characters. Use the standard alphabet and limit your non-text characters to hyphens and slashes (a small URL-normalisation sketch appears after this list).
  • Is capitalisation okay? This is one of those grey areas where you'll want to keep things as simple as possible. For Google, a capital letter is a different character than the lowercase version.
  • What about file extensions like .html? This question can be relevant if you have a website that still uses extensions like .php or .html at the end of a webpage URL. Modern web technology doesn't require file extensions anymore. Whether your article ends in .html or with a slash (which, technically, makes it a folder), or ends without any notation at all - it really doesn't matter. All those URL patterns work, and they can all perform just as well in Google.
  • Can you use parameters in article URLs? You can, but you shouldn't. In its Publisher Center documentation concerning redirects, Google specifically advises against using the '?id=' parameter in your article URLs.
  • Do my article URLs need a unique ID? No. This is a leftover from the early days of Google News, when there was an explicit requirement for article URLs to contain a unique ID number that was at least 3 digits long.
  • How long should my URL be? As long as you want, up to the rather extreme 2048-character limit built into most browsers. There's little correlation between URL length and article performance in Google's news ecosystem.
  • For almost any other purpose, changing existing URLs is generally a Bad Idea.
  • Over the years Google has given conflicting information about this, though recently they seem to have standardised on "no link value is lost in a redirect", which I'll admit I'm a little skeptical of. Regardless, the advice is the same: don't change URLs for any piece of indexed content unless you have a damn good reason to. This is why site migrations are such a trepidatious enterprise. Changing a site's tech stack often means changing page URLs, which can cause all sorts of SEO problems, especially for news publishers.
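Tying the character, capitalisation, and parameter advice above together, here is a small illustrative Python helper (my own sketch, not from the original article; the length cap is an arbitrary practical choice, far below the 2048-character browser limit) that turns a headline into the kind of simple lowercase, hyphenated slug the guidance describes:

```python
import re
import unicodedata

def slugify(headline: str, max_length: int = 80) -> str:
    """Lowercase, ASCII-only, hyphen-separated slug, per the guidance above."""
    text = unicodedata.normalize("NFKD", headline).encode("ascii", "ignore").decode("ascii")
    text = text.lower().replace("'", "")                 # drop apostrophes rather than hyphenating them
    text = re.sub(r"[^a-z0-9]+", "-", text).strip("-")   # basic characters and hyphens only
    return text[:max_length].rstrip("-")

print(slugify("Google's Mobile-First Indexing: What Publishers Need to Know?"))
# googles-mobile-first-indexing-what-publishers-need-to-know
```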